Update report index.html (#2099) #2101
Closed
github-actions[bot] wants to merge 701 commits into docs from
…kage-0.18.3-in-cli
…-0.18.3-in-cli updated to 0.18.3
…report-to-cli update report to 1.0.26
release/v0.18.2
…trics-in-cli fixed missing metrics in cli
release/v0.18.3
…context-in-integration added message context parsing
added sparkles emoji
…port" in various test fixtures and utility functions.
… LinksLineBlock for rendering multiple links with optional icons, and updated AlertMessageBuilder to utilize this new structure.
…tary-links Enable support for multiple links and icons in alert messages
* Update the version constraints of `google-cloud-storage`
Signed-off-by: Yu Ishikawa <yu-iskw@users.noreply.github.com>
* chore: tighten google-cloud-storage constraint to <3.2 for dbt-bigquery 1.10.3 compatibility
---------
Signed-off-by: Yu Ishikawa <yu-iskw@users.noreply.github.com>
Co-authored-by: Itamar Hartstein <haritamar@gmail.com>
* Run CLI tests on all warehouses instead of just Postgres
- Add pull_request_target trigger to test-all-warehouses.yml
- Add workflow_call trigger for reusability
- Remove dbt 1.8.0 from matrix, run only on latest version
- Update elementary-ref to use PR head SHA for pull_request_target events
- Skip Slack notifications for PR failures to avoid spam
- Delete test-main-warehouse.yml (superseded by test-all-warehouses.yml)
Resolves ELE-5218
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* Remove workflow_call trigger (not used by any other workflow)
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Itamar Hartstein <haritamar@gmail.com>
* remove python 3.9
* silence mypy error when subclassing click.MultiCommand
* use newer actions/setup-python version
---------
Co-authored-by: Itamar Hartstein <haritamar@gmail.com>
…2090)
ClickHouse does not support standard SQL UPDATE statements. Instead, it requires ALTER TABLE ... UPDATE syntax for mutations. This change adds adapter dispatch for the update_skipped_alerts and update_sent_alerts macros to use the correct ClickHouse syntax when running on ClickHouse adapters. Fixes the ClickHouse syntax error (Code 62) when running 'edr monitor'.
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Yosef Arbiv <yosef.arbiv@gmail.com>
Co-authored-by: Itamar Hartstein <haritamar@gmail.com>
click.MultiCommand is deprecated and will be removed in click 9. This change migrates to click.Group, which is the recommended replacement.
- Changed ElementaryCLI base class from click.MultiCommand to click.Group
- Replaced @click.command with @click.group decorator
- Removed _CMD_MAP dictionary and list_commands() method
- Added commands using cli.add_command() instead
- Updated get_command() to call super().get_command()
Fixes ELE-5220
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Itamar Hartstein <haritamar@gmail.com>
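The migration above can be sketched as follows. This is a minimal illustration of the click.Group pattern, not the actual ElementaryCLI code; the `monitor` subcommand here is a placeholder.

```python
import click


# With click.Group, no _CMD_MAP dict or list_commands() override is needed:
# the group tracks registered commands itself.
@click.group()
def cli():
    """Top-level CLI entry point (placeholder for ElementaryCLI)."""


@click.command()
def monitor():
    """Placeholder subcommand."""
    click.echo("monitoring")


# Commands are registered explicitly instead of being resolved via a dict.
cli.add_command(monitor)
```

A custom `get_command()` override, if still needed (e.g. for aliases), can simply delegate to `super().get_command(ctx, name)`.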
* Re-enable ClickHouse in CLI tests
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* Fix prettier formatting for warehouse-type array
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* Add clickhouse to Seed e2e dbt project step
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* Fix get_elementary_database_and_schema to use actual elementary schema
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* Revert "Fix get_elementary_database_and_schema to use actual elementary schema" (reverts commit 1fc1952)
* Fix ClickHouse column alias handling in populate_test_alerts macro
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* Use .get() for all field accesses in populate_test_alerts to handle ClickHouse column names
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* Revert "Use .get() for all field accesses in populate_test_alerts to handle ClickHouse column names" (reverts commit 5a7efb8)
* Revert "Fix ClickHouse column alias handling in populate_test_alerts macro" (reverts commit a3bef9c)
* Add debug logging to identify Undefined field in populate_test_alerts
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* Update debug logging to use 'is defined' checks for accurate Undefined detection
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* Add more detailed debug logging to find nested Undefined values
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* Fix ClickHouse column alias handling with .get() fallbacks
ClickHouse returns columns with a table prefix (e.g., 'failed_tests.database_name' instead of 'database_name'). Use .get() with fallbacks to handle both naming conventions for the affected fields:
- database_name
- schema_name
- tags
- test_params
- severity
- status
- result_rows
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* Fix ClickHouse column name ambiguity with explicit SQL aliases
ClickHouse returns columns with a table prefix when column names are ambiguous across joined tables. The affected columns (database_name, schema_name, tags, test_params, severity, status, result_rows) exist in both the failed_tests and tests tables. Fix: add explicit 'as column_name' aliases in the SQL query to force ClickHouse to use the alias name instead of the qualified column name. This is cleaner than using .get() fallbacks in the macro.
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* Remove DBT_EDR_DEBUG env var (debugging complete)
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Itamar Hartstein <haritamar@gmail.com>
* dbt version bump
Co-authored-by: GitHub Actions <noreply@github.com>
#2097)
* Improve fork safety: consolidate approval and add pull_request trigger
- Add pull_request trigger for internal PRs (non-forks) to test workflow changes immediately
- Keep pull_request_target for fork PRs that need access to secrets
- Move approval step to test-all-warehouses.yml (runs once instead of per-platform)
- Remove per-platform approval from test-warehouse.yml to reduce spam
Fixes ELE-5221
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* Fix: use !cancelled() and check fork-status result before running tests
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* Remove redundant comment on test job condition
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* Refactor: move should_skip logic inside PR event check block
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Itamar Hartstein <haritamar@gmail.com>
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com> Co-authored-by: Itamar Hartstein <haritamar@gmail.com>
…refs warnings (#2108)
* Replace deprecated update_forward_refs() with version-conditional model_rebuild()
Use model_rebuild() when running under pydantic v2, fall back to update_forward_refs() for pydantic v1 compatibility. This eliminates PydanticDeprecatedSince20 warnings in downstream projects using pydantic v2.
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* Use pydantic_shim for BaseModel in messages package
Switch all messages modules from 'from pydantic import BaseModel' to 'from elementary.utils.pydantic_shim import BaseModel'. This uses the existing v1 shim, so update_forward_refs() is the native v1 API call and no PydanticDeprecatedSince20 warnings are emitted under pydantic v2.
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* Fix import sorting for isort --profile black
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* Convert remaining 'from pydantic import BaseModel' imports to use pydantic_shim
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* Fix isort ordering for pydantic_shim imports
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Itamar Hartstein <haritamar@gmail.com>
…#2109)
The `get_log_path` function had a `return` statement inside a `finally` block, which silently swallows any in-flight exception, including `BaseException` subclasses like `KeyboardInterrupt`. Replace the `try/finally` with a proper `try/except` that catches the expected `ValueError` (from `list.index()`) and `IndexError` (from accessing the next argument), then move the directory creation and return statement outside the exception handler.
Fixes #1731
Co-authored-by: Itamar Hartstein <haritamar@gmail.com>
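A minimal sketch of the fixed shape, assuming a `--log-dir` style flag (the actual flag name and surrounding code in elementary may differ). The key point is that the `return` lives after the `try/except`, never in a `finally`:

```python
import os
import sys


def get_log_path(args=None):
    """Return the log directory passed after a '--log-dir' flag, or None."""
    args = sys.argv if args is None else args
    try:
        # ValueError if the flag is absent, IndexError if it is the last arg.
        log_dir = args[args.index("--log-dir") + 1]
    except (ValueError, IndexError):
        return None
    # Directory creation and return happen outside the exception handler,
    # so unrelated exceptions (e.g. KeyboardInterrupt) propagate normally.
    os.makedirs(log_dir, exist_ok=True)
    return log_dir
```

A `return` inside `finally` would have discarded even a `KeyboardInterrupt` raised mid-`try`; this structure cannot.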
…2110)
When a dbt exposure owner is configured as a plain string (e.g., `owner: username`) instead of a dict with name/email fields, the ExposureSchema validation fails because Pydantic cannot coerce a string into an OwnerSchema. Add a pre-validator on the `owner` field that converts a string value into an OwnerSchema instance with the string as the name, matching the existing pattern used by `_load_var_to_list` for other fields.
Fixes #1732
Co-authored-by: Itamar Hartstein <haritamar@gmail.com>
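The coercion logic itself is small; here it is sketched as a plain function rather than the actual pydantic pre-validator, so the shape is visible without pinning to a pydantic version (in the real code this body would sit inside a `pre=True` validator on `owner`, and the dict would be an OwnerSchema):

```python
from typing import Any


def coerce_owner(value: Any) -> Any:
    """Turn a bare string owner ('owner: username') into the dict shape an
    OwnerSchema-like model expects; pass dicts and None through untouched."""
    if isinstance(value, str):
        return {"name": value}
    return value
```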
…ne.utc) (#2111)
`datetime.utcnow()` has been deprecated since Python 3.12 because it returns a naive datetime that carries no timezone information, making it error-prone when mixed with timezone-aware datetimes. Replace all 8 occurrences across 7 files (5 source, 2 test) with the recommended `datetime.now(tz=timezone.utc)`, which returns a proper timezone-aware datetime.
Files changed:
- elementary/utils/time.py
- elementary/monitor/fetchers/alerts/schema/pending_alerts.py
- elementary/monitor/data_monitoring/alerts/data_monitoring_alerts.py
- elementary/messages/messaging_integrations/teams_webhook.py
- elementary/messages/messaging_integrations/slack_webhook.py
- elementary/messages/messaging_integrations/file_system.py
- tests/unit/monitor/fetchers/alerts/schemas/test_alert_data_schema.py
- tests/mocks/fetchers/alerts_fetcher_mock.py
Co-authored-by: themavik <themavik@users.noreply.github.com>
Co-authored-by: Itamar Hartstein <haritamar@gmail.com>
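The before/after difference in one snippet:

```python
from datetime import datetime, timezone

# Before: deprecated since Python 3.12; returns a naive datetime (tzinfo is None).
naive = datetime.utcnow()

# After: returns a timezone-aware datetime pinned to UTC.
aware = datetime.now(tz=timezone.utc)
```

Mixing the two is what makes the naive form error-prone: subtracting a naive datetime from an aware one raises `TypeError`.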
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com> Co-authored-by: Itamar Hartstein <haritamar@gmail.com>
…_iso_format_to_full_iso_format (#2116)
* fix(CORE-355): handle abbreviated timezone offsets in convert_partial_iso_format_to_full_iso_format
- Add _normalize_timezone_offset to expand +HH to +HH:00 before parsing
- Python 3.10 datetime.fromisoformat() rejects the +00 format from PostgreSQL
- Also fix typo: 'covert' -> 'convert' in error log message
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* fix: tighten regex to require time component before abbreviated tz offset
Anchors the pattern to :\d{2} (seconds) or .\d+ (fractional seconds) before the +/- sign, preventing false matches on date-only strings like '2024-01-15' where '-15' would incorrectly match.
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* test: add parametrized tests for convert_partial_iso_format_to_full_iso_format
Covers abbreviated offsets (+00, -05), fractional seconds, full offsets, no-offset input, and date-only strings to prevent regressions.
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Itamar Hartstein <haritamar@gmail.com>
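A hedged reconstruction of the normalization described above (the real `_normalize_timezone_offset` may differ in detail): expand an abbreviated offset (`+HH`/`-HH`) to `+HH:00`, requiring seconds or fractional seconds immediately before the sign so date-only strings like `2024-01-15` are left alone.

```python
import re

# Match ':SS' or '.ffff' followed by a bare +HH/-HH offset at end of string.
_ABBREV_TZ = re.compile(r"(:\d{2}|\.\d+)([+-]\d{2})$")


def normalize_timezone_offset(value: str) -> str:
    """Expand an abbreviated tz offset (+HH) to +HH:00 so that
    Python 3.10's datetime.fromisoformat() accepts it."""
    return _ABBREV_TZ.sub(r"\1\2:00", value)
```

Full offsets (`+00:00`) and date-only strings do not match the pattern and pass through unchanged.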
…ofiles template (#2124)
…CI collisions (#2126)
* fix: use py_<yymmdd>_<branch>_<hash> schema naming to prevent CI collisions
Replace the old truncation-based schema naming with a hash-based approach that prevents cross-branch collisions when concurrent CI jobs share the same warehouse. Uses the py_ prefix to identify the Python package CI (matching the dbt_ prefix in dbt-data-reliability). Format: py_<YYMMDD>_<branch≤29>_<8-char-hash>. The hash is derived from the concurrency group key.
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* style: collapse consecutive underscores in SAFE_BRANCH (CodeRabbit nitpick)
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* feat: add HHMM to schema timestamp for per-run uniqueness
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* style: use explicit UTC for timestamp (date -u)
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* style: add seconds to timestamp (YYMMDD_HHMMSS) per maintainer request
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Itamar Hartstein <haritamar@gmail.com>
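The final scheme (with the seconds-granularity timestamp) can be sketched in Python, though the actual implementation lives in the CI workflow's shell steps; the function name and sanitization details here are assumptions:

```python
import hashlib
import re
from datetime import datetime, timezone


def ci_schema_name(branch: str, concurrency_key: str, now=None) -> str:
    """Build py_<yymmdd_hhmmss>_<branch<=29>_<8-char-hash> for a CI run."""
    now = now or datetime.now(tz=timezone.utc)  # explicit UTC, like `date -u`
    stamp = now.strftime("%y%m%d_%H%M%S")
    # Sanitize the branch name: lowercase, collapse runs of non-alphanumerics
    # into a single underscore, then truncate to 29 chars.
    safe_branch = re.sub(r"[^a-z0-9]+", "_", branch.lower()).strip("_")[:29]
    # Hash the concurrency group key so different branches never collide.
    digest = hashlib.sha256(concurrency_key.encode()).hexdigest()[:8]
    return f"py_{stamp}_{safe_branch}_{digest}"
```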
* feat: add automatic retry for transient dbt command errors
Add per-adapter transient error detection and automatic retry logic
using tenacity to CommandLineDbtRunner._run_command.
- New module transient_errors.py with per-adapter error patterns for
BigQuery, Snowflake, Redshift, Databricks, Athena, Dremio, Postgres,
Trino, and ClickHouse, plus common connection error patterns.
- _execute_inner_command wraps _inner_run_command with tenacity retry
(3 attempts, exponential backoff 10-60s).
- Only retries when output matches a known transient error pattern for
the active adapter. Non-transient failures propagate immediately.
- Handles both raise_on_failure=True (DbtCommandError) and
raise_on_failure=False (result.success=False) code paths.
- Added tenacity>=8.0,<10.0 to pyproject.toml dependencies.
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
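The pattern-matching half of this design can be sketched as follows. The pattern lists below are illustrative stand-ins, not the real contents of transient_errors.py; only the shape (common patterns plus per-adapter patterns, lowercase substring matching, check-everything fallback for unknown adapters) follows the description above.

```python
from typing import Optional

# Illustrative patterns only; the real lists cover BigQuery, Snowflake,
# Redshift, Databricks, Athena, Dremio, Postgres, Trino, and ClickHouse.
_COMMON_PATTERNS = ["connection reset", "timed out", "temporarily unavailable"]
_ADAPTER_PATTERNS = {
    "bigquery": ["503 service unavailable", "http 503", "error 409"],
    "snowflake": ["incident id:"],
}
# Pre-computed at import time for the unknown-adapter fallback.
_ALL_ADAPTER_PATTERNS = [p for ps in _ADAPTER_PATTERNS.values() for p in ps]


def is_transient_error(output: str, adapter_type: Optional[str]) -> bool:
    """Return True if the command output matches a known transient pattern."""
    haystack = output.lower()
    patterns = list(_COMMON_PATTERNS)
    if adapter_type in _ADAPTER_PATTERNS:
        patterns += _ADAPTER_PATTERNS[adapter_type]
    else:
        # Unknown (or None) adapter type: check all patterns defensively.
        patterns += _ALL_ADAPTER_PATTERNS
    return any(p in haystack for p in patterns)
```

Only a positive match triggers a tenacity retry; anything else propagates immediately, preserving the non-transient failure behavior.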
* fix: guard _build_haystack against non-string arguments
When tests mock _inner_run_command, result.output and result.stderr
may be MagicMock objects instead of strings. Add isinstance checks
to _build_haystack to avoid TypeError in str.join().
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* fix: address CodeRabbit review feedback
- Fix ''.join → ' '.join for readable error messages
- Use logger.exception instead of logger.error for stack traces
- Make '503' pattern more specific ('503 service unavailable', 'http 503')
- Make 'incident' pattern more specific ('incident id:')
- Remove 'connection refused' from common patterns (too broad)
- Remove redundant dremio patterns already covered by common
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* fix: address CodeRabbit round 2 feedback
- Always capture output for transient error detection (capture_output=True)
- Extract actual output/stderr from DbtCommandError.proc_err
- Preserve raise_on_failure contract: re-raise DbtCommandError after retries
- Deduplicate databricks/databricks_catalog patterns via _DATABRICKS_PATTERNS
- Check all adapter patterns when target is not a known adapter type
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* fix: address CodeRabbit round 3 feedback
- Add explicit exception chaining (raise from exc) to satisfy Ruff B904
- Treat target=None as unknown target, checking all adapter patterns defensively
- Pre-compute _ALL_ADAPTER_PATTERNS at import time for efficiency
- Add unit tests for retry branch behavior (6 test cases covering
transient DbtCommandError retry+re-raise, failed result retry+return,
and non-transient immediate propagation)
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* fix: remove unused imports and fix isort ordering in test_retry_logic
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* style: fix black formatting in test_retry_logic imports
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* test: add early retry success test case
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* fix: restore original capture_output passthrough to preserve streaming output
When capture_output=False, dbt output should stream directly to the
terminal. The previous implementation always passed capture_output=True
to _inner_run_command, which silently captured output that was meant
to be streamed.
Transient-error detection still works:
- DbtCommandError path: output extracted from exc.proc_err
- Failed-result path with capture: result.output available
- Failed-result path without capture: output streamed to terminal,
treated as non-transient (user already saw output)
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* fix: always capture output for transient detection, print to terminal when capture_output=False
Revert to always passing capture_output=True to _inner_run_command so
transient-error detection can always inspect stdout/stderr. When the
caller set capture_output=False (expecting to see output), we now
explicitly write the captured output to sys.stdout/sys.stderr after
the command completes.
This means capture_output now only controls:
- Whether --log-format json is added to dbt CLI args
- Whether output is parsed/logged via parse_dbt_output
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* fix: guard sys.stdout/stderr.write with isinstance check
The existing test_dbt_runner tests mock subprocess.run, which causes
result.output/stderr to be MagicMock objects. Add isinstance(str)
checks before writing to stdout/stderr to avoid TypeError.
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* refactor: always set --log-format and always capture output, make capture_output a no-op
- Always pass --log-format to dbt CLI (previously gated on capture_output)
- Always capture subprocess output (for transient-error detection)
- Always parse output when log_format is json (previously gated on capture_output)
- Remove capture_output from internal methods (_run_command, _execute_inner_command, _inner_run_command)
- Keep capture_output on public API methods (run, test, deps, run_operation) as a deprecated no-op for backward compatibility
- Remove sys.stdout/stderr.write hack (no longer needed since output is always parsed/logged)
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* fix: update test_alerts_fetcher positional indices for --log-format prepend
The refactor to always prepend --log-format json to dbt commands shifted
all positional args by 2. Update hardcoded indices in test assertions.
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* fix: parse output regardless of log_format, not just json
parse_dbt_output already handles both json and text formats, so remove
the unnecessary log_format == 'json' guard.
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* fix: add BigQuery 409 duplicate job ID to transient error patterns
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* fix: narrow BigQuery 409 pattern to 'error 409' instead of generic 'already exists'
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* refactor: simplify retry flow with _inner_run_command_with_retries
- Replace _execute_inner_command + nested _attempt() with a single
_inner_run_command_with_retries method decorated with tenacity @Retry
- Move exhausted-retry handling (log, re-raise or return exc.result)
into _run_command try/except
- Add module-level _before_retry_log(retry_state) for retry logging;
log_command_args read from retry_state.kwargs
- Call chain: _run_command -> _inner_run_command_with_retries -> _inner_run_command
- Update test docstring to reference new method name
Made-with: Cursor
* style: fix black formatting for is_transient_error call
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* docs: fix docstring for target=None in is_transient_error (all patterns checked, not just common)
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* feat: resolve adapter type from profiles.yml for transient error detection
- Add _get_adapter_type() method to CommandLineDbtRunner that parses
dbt_project.yml and profiles.yml to resolve the actual adapter type
(e.g. 'bigquery', 'snowflake') for the selected target.
- Pass adapter_type instead of self.target to is_transient_error(),
ensuring correct per-adapter pattern matching.
- Remove duplicate 'databricks_catalog' entry from _ADAPTER_PATTERNS
since profiles.yml always reports the adapter type, not the profile name.
- Update docstrings to reflect that target should be the adapter type.
- Gracefully falls back to None (check all patterns) if profiles cannot
be parsed.
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
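The resolution logic can be sketched over already-parsed YAML dicts (the real _get_adapter_type reads and parses dbt_project.yml and profiles.yml first; the function name and exact key handling here are assumptions). Each missing key falls back to None, which the caller treats as "check all patterns":

```python
from typing import Optional


def resolve_adapter_type(
    dbt_project: dict, profiles: dict, target: Optional[str]
) -> Optional[str]:
    """Resolve the adapter type (e.g. 'bigquery') for the selected target."""
    profile_name = dbt_project.get("profile")  # dbt_project.yml names the profile
    if profile_name is None:
        return None
    profile = profiles.get(profile_name)  # profiles.yml holds the outputs
    if profile is None:
        return None
    outputs = profile.get("outputs", {})
    # Fall back to the profile's default target when none was given.
    output = outputs.get(target or profile.get("target"))
    if output is None:
        return None
    return output.get("type")
```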
* refactor: simplify _get_adapter_type — remove broad try/except, streamline logic
Addresses Itamar's review feedback:
- Removed the over-defensive try..except wrapper
- Simplified flow: parse profiles.yml directly, then dbt_project.yml for profile name
- Each missing-key case returns None with a debug log (no silent exception swallowing)
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* refactor: rename target→adapter_type in is_transient_error signature
Addresses Itamar's review comment — the parameter now reflects that it
receives the adapter type (e.g. 'bigquery'), not the profile target name
(e.g. 'dev'). No logic change; callers pass it positionally.
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Itamar Hartstein <haritamar@gmail.com>
…chema cleanup (#2127)
* perf: optimize CI by skipping redundant dbt test artifacts upload and adding schema cleanup
- Add --vars to dbt test step to disable artifact autoupload (already done during dbt run) and skip per-table temp table cleanup (schema will be dropped at the end)
- Add drop_test_schemas macro and CI step to drop both main and elementary schemas after tests
- This significantly reduces on-run-end hook time, especially for Athena, where each query has high overhead (~3-5s) and the hook was issuing 100+ individual queries
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* fix: correct macro namespace to elementary_integration_tests
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* fix: update macro namespace to elementary_integration_tests in drop_test_schemas.sql
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* style: quote ClickHouse schema identifier in DROP statement
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* refactor: move clean_elementary_temp_tables to dbt_project.yml per review feedback
- clean_elementary_temp_tables: false is now set in dbt_project.yml (applies to all steps)
- disable_dbt_artifacts_autoupload: true remains on the CLI for the dbt test step only (dbt run still needs to upload artifacts)
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* refactor: move disable_dbt_artifacts_autoupload to dbt_project.yml
Both vars are now in dbt_project.yml, so no --vars is needed on the CLI. disable_dbt_artifacts_autoupload only gates the on-run-end hook's upload_dbt_artifacts() call, so it's safe to set globally.
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Itamar Hartstein <haritamar@gmail.com>
…tion (#2129)
* fix: surface exception text in APIDbtRunner for transient error detection
APIDbtRunner only captures JinjaLogInfo and RunningOperationCaughtError events into the output field. When a command fails with a transient error (e.g. RemoteDisconnected), the error text lives in res.exception, not in the captured output. This means _inner_run_command_with_retries has nothing to match against and never fires the retry.
Fix: extract the res.exception text and put it into the stderr field of APIDbtCommandResult (the dbt Python API doesn't use stderr). This allows the transient error detection in _inner_run_command_with_retries to examine the exception text, analogous to how SubprocessDbtRunner captures subprocess stderr.
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* style: address CodeRabbit review - remove /tmp paths and prefix unused args
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* style: fix black formatting - wrap long function signatures
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
* style: fix black formatting - keep short signature on one line
Co-Authored-By: Itamar Hartstein <haritamar@gmail.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Itamar Hartstein <haritamar@gmail.com>
Fixes #2037
The --config-dir flag was parsed but never passed to Config(), which always used the default ~/.edr. This wires the CLI argument through so the custom config directory is actually used.
- Remove premature directory creation from Config._load_configuration that ran at import time (logo/upgrade) and created ~/.edr before CLI parsing
- Create config_dir in anonymous_tracking when writing the user id file
- Add --config-dir to debug and dbt_init commands for consistency
- Use keyword args for Config() in report command for clarity
Co-authored-by: Itamar Hartstein <haritamar@gmail.com>